Search Results (Page 1 of 1)
Search for: All records
Total resources: 3
-
What do high school students learn from a two-day datathon in which they work with data to visualize the impact of biased data on healthcare decisions? How do they interact with their teams of high school students, data scientists, clinicians, and teachers? What did we, the developers and leaders of the datathon, learn? How would we approach it differently next year? Our goal is to answer these questions and share lessons learned. We will then divide the audience into teams to brainstorm ways to approach and solve some of the problems we experienced, and we hope to recruit some audience members to participate in our June 2025 Brown University Health Artificial Intelligence (AI) Systems Thinking for Equity (HASTE) Datathon in Providence, Rhode Island (Brown University Datathon, 2024).
Free, publicly accessible full text available February 17, 2026.
-
Khan, Muhammad Ali; Ayub, Umair; Naqvi, Syed Arsalan Ahmed; Khakwani, Kaneez Zahra Rubab; Sipra, Zaryab bin Riaz; Raina, Ammad; Zhou, Sihan; He, Huan; Saeidi, Amir; Hasan, Bashar; et al. (Journal of the American Medical Informatics Association)
Abstract
Objective: Data extraction from the published literature is the most laborious step in conducting living systematic reviews (LSRs). We aim to build a generalizable, automated data extraction workflow leveraging large language models (LLMs) that mimics the real-world 2-reviewer process.
Materials and Methods: A dataset of 10 trials (22 publications) from a published LSR was used, focusing on 23 variables related to trial, population, and outcomes data. The dataset was split into prompt development (n = 5) and held-out test sets (n = 17). GPT-4-turbo and Claude-3-Opus were used for data extraction. Responses from the 2 LLMs were considered concordant if they were the same for a given variable. The discordant responses from each LLM were provided to the other LLM for cross-critique. Accuracy, ie, the total number of correct responses divided by the total number of responses, was computed to assess performance.
Results: In the prompt development set, 110 (96%) responses were concordant, achieving an accuracy of 0.99 against the gold standard. In the test set, 342 (87%) responses were concordant. The accuracy of the concordant responses was 0.94. The accuracy of the discordant responses was 0.41 for GPT-4-turbo and 0.50 for Claude-3-Opus. Of the 49 discordant responses, 25 (51%) became concordant after cross-critique, increasing accuracy to 0.76.
Discussion: Concordant responses by the LLMs are likely to be accurate. In instances of discordant responses, cross-critique can further increase the accuracy.
Conclusion: Large language models, when simulated in a collaborative, 2-reviewer workflow, can extract data with reasonable performance, enabling truly "living" systematic reviews.
Free, publicly accessible full text available January 21, 2026.
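The 2-reviewer workflow the abstract describes (accept concordant answers, send discordant ones to cross-critique, score accuracy as correct over total) can be sketched as plain control flow. This is a minimal illustration, not the authors' implementation: the LLM calls are stubbed out as an ordinary callback, and all function names and the toy critique rule are assumptions.

```python
def reconcile(responses_a, responses_b, cross_critique):
    """Merge two extractors' answers, sending disagreements to cross-critique.

    responses_a / responses_b map variable name -> extracted value (one per
    LLM reviewer). cross_critique(var, own, other) stands in for asking one
    model to revise its answer after seeing the other's; here it is just a
    plain function.
    """
    final = {}
    discordant = []
    for var in responses_a:
        if responses_a[var] == responses_b[var]:
            # Concordant responses are accepted as-is (the paper found
            # these are likely to be accurate).
            final[var] = responses_a[var]
        else:
            discordant.append(var)
            # Each model critiques the other's answer; if the revised
            # answers now agree, accept them, otherwise leave unresolved.
            revised_a = cross_critique(var, responses_a[var], responses_b[var])
            revised_b = cross_critique(var, responses_b[var], responses_a[var])
            final[var] = revised_a if revised_a == revised_b else None
    return final, discordant


def accuracy(final, gold):
    """Total correct responses divided by total responses, as in the paper."""
    correct = sum(1 for var, val in final.items() if val == gold.get(var))
    return correct / len(gold)
```

In the real workflow, `cross_critique` would be a second prompt to GPT-4-turbo or Claude-3-Opus containing the other model's extraction; any resolution rule for answers that stay discordant after critique is a design choice the sketch leaves open (here, unresolved variables score as incorrect).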
-
Rivera, Donna R.; Peters, Solange; Panagiotou, Orestis A.; Shah, Dimpy P.; Kuderer, Nicole M.; Hsu, Chih-Yuan; Rubinstein, Samuel M.; Lee, Brendan J.; Choueiri, Toni K.; de Lima Lopes, Gilberto; et al. (Cancer Discovery)
